--- title: Calibration Procedure keywords: fastai sidebar: home_sidebar summary: "Every OpenHSI camera is unique and requires calibration before use. This module provides the abstractions to create the calibration data which are then used in operation. " description: "Every OpenHSI camera is unique and requires calibration before use. This module provides the abstractions to create the calibration data which are then used in operation. " nb_path: "nbs/05_calibrate.ipynb" ---
{% raw %}
{% endraw %} {% raw %}
{% endraw %} {% raw %}
{% endraw %} {% raw %}

sum_gaussians[source]

sum_gaussians(x:array, *args)

`args` holds the Gaussian parameters: an amplitude, peak position, and peak width for each peak, plus a constant offset.

{% endraw %} {% raw %}
{% endraw %} {% raw %}

class SettingsBuilderMixin[source]

SettingsBuilderMixin()

{% endraw %} {% raw %}
{% endraw %} {% raw %}

class SettingsBuilderMetaclass[source]

SettingsBuilderMetaclass(clsname:str, cam_class, attrs) :: type

type(object_or_name, bases, dict)
type(object) -> the object's type
type(name, bases, dict) -> a new type
{% endraw %} {% raw %}

create_settings_builder[source]

create_settings_builder(clsname:str, cam_class:Camera Class)

Create a `SettingsBuilder` class called `clsname` based on your chosen `cam_class`.
{% endraw %} {% raw %}
{% endraw %}

There are two ways to create a SettingsBuilder class that works for your custom camera: one uses Python metaclasses, the other uses mixins.

For example, you can create a SettingsBuilder class that works for your custom camera as follows.

SettingsBuilder = create_settings_builder("SettingsBuilder",SimulatedCamera)
sb = SettingsBuilder(json_path="../assets/cam_settings.json",pkl_path="../assets/cam_calibration.pkl")

sb.update_intsphere_fit()
# other calibration functions...

sb.dump()
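The class-composition idea behind `create_settings_builder` can be sketched in plain Python. Everything below (`CalibrationMixin`, `FakeCamera`, `make_settings_builder`) is hypothetical and only illustrates the mixin-plus-dynamic-class pattern, not the library's actual internals.

```python
# Hypothetical sketch of combining a calibration mixin with a camera class
# by creating a new class dynamically with type().
class CalibrationMixin:
    """Stands in for SettingsBuilderMixin: supplies calibration helpers."""
    def update_row_minmax(self):
        return "rows updated"

class FakeCamera:
    """Stands in for a real camera class; accepts settings as kwargs."""
    def __init__(self, **kwargs):
        self.settings = kwargs

def make_settings_builder(clsname, cam_class):
    # The MRO puts the mixin's calibration helpers in front of the
    # camera implementation, so both sets of methods are available.
    return type(clsname, (CalibrationMixin, cam_class), {})

SettingsBuilder = make_settings_builder("SettingsBuilder", FakeCamera)
sb = SettingsBuilder(json_path="settings.json")
```

The key point is that the generated class is a subclass of your chosen camera class, so the calibration methods run against that camera's capture routines.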
{% raw %}
#hide_output

json_path='../cals/cam_settings_lucid_214302972.json'
pkl_path="../cals/openhsi_214302972.pkl"

SettingsBuilder = create_settings_builder("SettingsBuilder",LucidCamera)
#sb = SettingsBuilder(json_path="../assets/cam_settings.json",pkl_path="../assets/cam_calibration.pkl")
sb = SettingsBuilder(json_path=json_path,
                     pkl_path=pkl_path,
                     processing_lvl=-1,
                     pixel_format='Mono8',
#                      binxy=[2, 2],
#                      resolution=[540,720]
                    )
{% endraw %}

Find illuminated sensor area

Since the longer dimension is used for the spectral channels, the rows correspond to the cross-track dimension and are limited by the optics (slit). The usable area is cropped out.
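One simple way to find the illuminated band of rows is to threshold the row means of a flat-field frame. This is an illustrative sketch, not the library's implementation; `find_illuminated_rows` and its parameters are made up here, though `edgezone` mirrors the parameter of `update_row_minmax` below.

```python
import numpy as np

def find_illuminated_rows(flat_field, frac=0.5, edgezone=4):
    """Return (row_min, row_max) bounding the illuminated slit region."""
    row_means = flat_field.mean(axis=1)
    thresh = frac * row_means.max()
    bright = np.flatnonzero(row_means > thresh)
    # shrink by a safety margin so partially lit edge rows are excluded
    return bright[0] + edgezone, bright[-1] - edgezone

frame = np.zeros((100, 64))
frame[30:70, :] = 1000.0          # simulated illuminated slit region
row_min, row_max = find_illuminated_rows(frame)
```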

{% raw %}

SettingsBuilderMixin.retake_flat_field[source]

SettingsBuilderMixin.retake_flat_field(show:bool=False)

SettingsBuilderMixin.update_row_minmax[source]

SettingsBuilderMixin.update_row_minmax(edgezone:int=4)

{% endraw %} {% raw %}
hvimg=sb.retake_flat_field(show=True)
hvimg.opts(width=800,height=800)
print(sb.calibration["flat_field_pic"].max())
hvimg
{% endraw %} {% raw %}
sb.update_row_minmax()
{% endraw %} {% raw %}
sb.update_resolution()
{% endraw %}

Smile Correction

The emission lines, which should be straight and vertical, appear slightly curved. This is smile error (an error in the spectral dimension).
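The per-row correction can be pictured as follows: locate an emission line in every row and record its offset from a reference row. This toy sketch (with a made-up `estimate_smile_shifts` helper and a synthetic parabolic smile) only illustrates the idea behind `update_smile_shifts`.

```python
import numpy as np

def estimate_smile_shifts(hgar_img, ref_row=None):
    """Per-row column shift of the brightest line relative to a reference row."""
    peak_cols = hgar_img.argmax(axis=1)            # brightest line per row
    ref_row = ref_row if ref_row is not None else hgar_img.shape[0] // 2
    return peak_cols - peak_cols[ref_row]          # shift needed per row

rows, cols = 50, 200
img = np.zeros((rows, cols))
for r in range(rows):
    curve = int(0.002 * (r - rows // 2) ** 2)      # synthetic parabolic smile
    img[r, 100 + curve] = 1.0
shifts = estimate_smile_shifts(img)
```

Once the shifts are known, each row can be rolled back into alignment so every column corresponds to a single wavelength.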

{% raw %}

SettingsBuilderMixin.retake_HgAr[source]

SettingsBuilderMixin.retake_HgAr(show:bool=False, numframes:int=10)

SettingsBuilderMixin.update_smile_shifts[source]

SettingsBuilderMixin.update_smile_shifts()

{% endraw %} {% raw %}
hvimg=sb.retake_HgAr(show=True)
hvimg.opts(width=800,height=800)
print(sb.calibration["HgAr_pic"].max())
hvimg
{% endraw %} {% raw %}
sb.update_smile_shifts()
{% endraw %}

Map the spectral axis to wavelengths

To do this, peaks in the HgAr spectrum are found and refined by curve-fitting Gaussians. The peak locations then allow interpolation from array (column) index to wavelength (nm).
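The peak-find, Gaussian-refine, and interpolate steps can be sketched with a synthetic spectrum. The HgAr wavelengths below are real emission lines (they also appear in `brightest_peaks`); the peak positions and fitting details are illustrative, not `fit_HgAr_lines` itself.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

cols = np.arange(300)
known_nm = [435.833, 546.074, 763.511]            # HgAr emission lines
true_centres = [50.0, 140.0, 260.0]               # synthetic peak positions
spectrum = sum(gaussian(cols, 1.0, c, 2.0) for c in true_centres)

# coarse peak detection, then sub-pixel refinement with a Gaussian fit
peaks, _ = find_peaks(spectrum, height=0.5)
centres = []
for p in peaks:
    sl = slice(max(p - 8, 0), p + 8)
    popt, _ = curve_fit(gaussian, cols[sl], spectrum[sl], p0=[1.0, p, 2.0])
    centres.append(popt[1])

# map every column index to a wavelength by interpolating between lines
wavelengths = np.interp(cols, centres, known_nm)
```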

{% raw %}

SettingsBuilderMixin.fit_HgAr_lines[source]

SettingsBuilderMixin.fit_HgAr_lines(top_k:int=10, brightest_peaks:list=[435.833, 546.074, 763.511], filter_window:int=1, interactive_peak_id:bool=False, find_peaks_height:int=10, prominence=0.2, width=1.5)

Finds the index to wavelength map given a spectra and a list of emission lines.
To filter the spectra, set `filter_window` to an odd number > 1.
{% endraw %} {% raw %}
sb.fit_HgAr_lines(top_k=10)
{% endraw %}

Each column in our camera frame (after smile correction) corresponds to a particular wavelength. The interpolation between column index and wavelength is slightly nonlinear, which is to be expected from the diffraction grating; however, it is linear to a good approximation. A linear interpolation gives an absolute error of $\pm$3 nm, whereas the cubic interpolation used here gives an absolute error of $\pm$0.3 nm (approximately the spacing between each column). Using higher order polynomials doesn't improve the error due to overfitting.
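A toy comparison makes the linear-versus-cubic point concrete. The quadratic term in the index-to-wavelength map below is invented for illustration; only the qualitative conclusion (cubic beats linear on a mildly nonlinear map) carries over.

```python
import numpy as np
from scipy.interpolate import interp1d

idx = np.linspace(0, 1000, 11)                  # sparse calibration points
nm = 400.0 + 0.3 * idx + 1e-5 * idx ** 2        # mildly nonlinear map

fine = np.linspace(0, 1000, 1001)
truth = 400.0 + 0.3 * fine + 1e-5 * fine ** 2

# maximum absolute error of each interpolant against the true map
lin_err = np.abs(interp1d(idx, nm, kind="linear")(fine) - truth).max()
cub_err = np.abs(interp1d(idx, nm, kind="cubic")(fine) - truth).max()
```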

For fast real time processing, the fast binning procedure assumes a linear interpolation because the binning algorithm then reduces to a single broadcasted summation with no additional memory allocation overhead. A slower, more accurate spectral binning procedure is also provided; it uses the cubic interpolation described here but requires hundreds of temporary arrays to be allocated each time. Binning can also be done in post processing after collecting raw data.
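The fast path's "single broadcasted summation with no allocation" idea can be sketched for equal-width bins: precompute the bin layout once, then bin each frame into a preallocated output. This is an illustrative sketch of the technique, not the module's actual binning code.

```python
import numpy as np

n_bins = 4

def fast_bin(frame, out):
    """Sum equal-width groups of columns into a preallocated output array."""
    # reshape is a view (no copy); sum writes straight into `out`,
    # so no temporary arrays are created per frame
    return frame.reshape(frame.shape[0], n_bins, -1).sum(axis=2, out=out)

frame = np.arange(24, dtype=float).reshape(2, 12)   # 2 rows x 12 columns
out = np.empty((2, n_bins))
binned = fast_bin(frame, out)
```

A cubic-weighted binning, by contrast, needs per-bin weight arrays and intermediate products, which is where the temporary allocations come from.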

{% raw %}

SettingsBuilderMixin.update_intsphere_fit[source]

SettingsBuilderMixin.update_intsphere_fit(spec_rad_ref_data='../assets/112704-1-1_1nm_data.csv', spec_rad_ref_luminance:int=52020)

{% endraw %} {% raw %}
fig = sb.update_intsphere_fit()
{% endraw %} {% raw %}
 
{% endraw %}

Integrating Sphere data

A 4D datacube with coordinates of cross-track, wavelength, exposure, and luminance.

Needs testing!

SpectraPT TCP Client

Class to interact with the SpectraPT over TCP.
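A minimal client along these lines might map a requested luminance to the nearest preset from `lum_preset_dict` and send it over a socket. The `PRESET` command string below is a guess at the wire protocol and the class is a hypothetical sketch, not `SpectraController` itself; only the preset dictionary shape and the default host/port come from the signature above.

```python
import socket

class SpectraClient:
    """Hypothetical sketch of a SpectraPT-style TCP client."""

    def __init__(self, lum_presets, host="localhost", port=3434):
        self.lum_presets = lum_presets
        self.addr = (host, port)

    def preset_for(self, luminance):
        # choose the preset whose luminance key is closest to the request
        key = min(self.lum_presets, key=lambda k: abs(k - luminance))
        return self.lum_presets[key]

    def set_luminance(self, luminance):
        preset = self.preset_for(luminance)
        with socket.create_connection(self.addr, timeout=5) as s:
            s.sendall(f"PRESET {preset}\n".encode())  # hypothetical command

presets = {0: 1, 1000: 2, 2000: 3, 5000: 6, 10000: 11}
client = SpectraClient(presets)
```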

{% raw %}

class SpectraController[source]

SpectraController(lum_preset_dict:Dict[int, int]={0: 1, 1000: 2, 2000: 3, 3000: 4, 4000: 5, 5000: 6, 6000: 7, 7000: 8, 8000: 9, 9000: 10, 10000: 11, 20000: 12, 25000: 13, 30000: 14, 35000: 15, 40000: 16}, host:str='localhost', port:int=3434)

{% endraw %} {% raw %}
{% endraw %}